- Path: goanna.cs.rmit.EDU.AU!not-for-mail
- From: ok@goanna.cs.rmit.EDU.AU (Richard A. O'Keefe)
- Newsgroups: comp.lang.ada,comp.lang.c++,comp.lang.c,comp.lang.modula3,comp.lang.modula2
- Subject: Re: Hungarian notation - whoops!
- Date: 1 Mar 1996 20:57:12 +1100
- Organization: Comp Sci, RMIT, Melbourne, Australia
- Message-ID: <4h6hlo$hqu@goanna.cs.rmit.EDU.AU>
- References: <30C40F77.53B5@swsbbs.com> <4fms62$c0p@goanna.cs.rmit.EDU.AU> <4ft1ruINN6dr@keats.ugrad.cs.ubc.ca> <4g9255$74s@goanna.cs.rmit.EDU.AU> <4gip1iINNjd@keats.ugrad.cs.ubc.ca>
- NNTP-Posting-Host: goanna.cs.rmit.edu.au
- X-Newsreader: NN version 6.5.0 #0 (NOV)
-
- > >I wrote:
- > > But the fact that abs(x) may deliver a negative
- > > number is something I have to live with the whole time.
-
- c2a192@ugrad.cs.ubc.ca (Kazimir Kylheku) writes:
- >It does so for the largest negative value. This is documented. Unfortunately,
- >as you claim, you do have to check that yourself. And of course, to know what
- >the largest negative value is, you can use the standard #defined manifest
- >constants. It's not like abs() returns a negative value for any old random
- >input. The fact of the matter is that the additive inverse of the largest
- >negative value under two's complement simply can't be represented in a two's
- >complement word of the same size, yet the return value from abs() is a signed
- >quantity. Shrug.
-
- "shrug"?
- Every program I write either (a) gets significantly more complex, making it
- more costly to write, inspect, and debug, or (b) incurs risk of error, and
- you say "shrug"?
-
- Why should I read anything else you write, if our attitudes are so different?
-
- As for the rest, which I did read, Kazimir keeps on trying to instruct me
- about C. But I ****know**** about C. I know a LOT about C. I don't need
- to be instructed about it. My problem isn't ignorance, it's dislike.
-
- >For _one_ legal input. :)
-
- One legal input is one LEGAL input.
-
- I am concerned about writing reliable maintainable software at affordable
- cost. I am only interested in hardware, languages, compilers, and so on
- as a means to that end. I am NOT interested in wasting my time fighting
- around rough edges that do not serve MY ends.
-
- >I didn't say that it wasn't. But the Ada compiler has to also do a little
- >``code explosion'' to ensure the same portability as your C with programmer
- >inserted masking.
-
- Yes, but the COMPILER does it! _I_ don't have to design, code, inspect, test,
- debug, document, maintain, &c the code that the compiler generates. I don't
- _care_ what the compiler does. What I care about is that my source code is
- as clean and straightforward as possible. Surely I cannot be alone in this,
- or Ada would not exist.
-
- > >Overflows aren't the problem. Restricted machine arithmetic is the problem.
-
- >Not much you can do about the machine, as a programmer and supporting
- >multi-precision arithmetic in any language imposes a lot of overhead and code
- >explosion.
-
- (a) I didn't ask for multi-precision arithmetic.
- I meant "restricted" in an entirely different sense.
- (b) It is simply false that supporting multi-precision arithmetic imposes
- a LOT of overhead or code explosion. Bad implementations may have
- such overheads, but it is bad implementation that imposes them, not
- high precision arithmetic. If you mean that supporting numbers up to
- 18 decimal digits (as required by the COBOL standard) on a 32-bit-only
- machine is slower than confining yourself to what the machine can
- do in a single cycle, then yes, but so what? What counts is overhead
- *relative to other ways of getting RIGHT ANSWERS*. (Note: on a number
- of modern machines a subroutine call is a single instruction, so if we
- compare
- load r1, X
- load r2, Y
- add r1, r1, r2
- store r1, Z
- with
- load r1, @X
- load r2, @Y
- call,delayed add_64
- load r3, @Z
- we find no difference whatsoever in the code size. Yes, it's slower,
- but it gets the right answer.)
-
-
- Look, let's drop this. As far as I was concerned, the real issue is
- that
-
- hardware and software exist to help us solve real-world problems;
- they should be co-designed to minimise the cost of reliable,
- correct computing.
-
- There is no need to educate me about C or present CPUs. C was, and in its
- essence still is, a language which is designed to make it easy for an
- expert programmer to control a machine. There is certainly room for such
- languages; even assembly code has its niche. My complaint is not about
- twos-complement as such, it is about programming languages that let it show
- through in the form of giving wrong answers. It is NOT my job to make my
- program complicated to compensate for weak compilers. That is my complaint.
-
- >BTW do you have some sort of specific gripe with two's complement
- >representations of integer arithmetic?
-
- I already explained my gripe in detail. The "extra" value may simplify
- life for computer architects. But it _doesn't_ simplify life for
- programmers. Example:
-
- int like_atoi(char *p) {
- int sign = 1;
- int n = 0;
-
- if (*p == '+') p++; else
- if (*p == '-') p++, sign = -1;
-
- while (isdigit(*p)) n = n*10 + (*p++ - '0');
- return n*sign;
- }
-
- If you are on a machine where signed arithmetic overflows are signalled
- (MIPS), why does this have to be written
-
- int like_atoi(char *p) {
- int sign = -1;
- int n = 0;
-
- if (*p == '+') p++; else
- if (*p == '-') p++, sign = 1;
-
- while (isdigit(*p)) n = n*10 - (*p++ - '0');
- return n*sign;
- }
-
- instead? In sign-and-magnitude or ones-complement arithmetic, both versions
- do exactly the same thing. In overflow-checked twos-complement, you have to
- use the second version. I've encountered quite a number of programs that
- get it wrong. You have to be aware of that extra value the whole time, and
- it doesn't solve any application problems.
-
- --
- Election time; but how to get Labor _out_ without letting Liberal _in_?
- Richard A. O'Keefe; http://www.cs.rmit.edu.au/~ok; RMIT Comp.Sci.
-